    Split-beam echosounder observations of natural methane seep variability in the northern Gulf of Mexico

    A method for positioning and characterizing plumes of bubbles from marine gas seeps using an 18 kHz scientific split-beam echo sounder (SBES) was developed and applied to acoustic observations of plumes of presumed methane gas bubbles originating at approximately 1400 m depth in the northern Gulf of Mexico. A total of 161 plume observations from 27 repeat surveys were grouped by proximity into 35 clusters of gas vent positions on the seafloor. Profiles of acoustic target strength per vertical meter of plume height were calculated with compensation for both the SBES beam pattern and the geometry of plume ensonification. These profiles were used as indicators of the relative fluxes and fates of gas bubbles acoustically observable at 18 kHz and showed significant variability between repeat observations at time intervals ranging from 1 h to 7.5 months. Active gas venting was observed during approximately one third of the survey passes at each cluster. While gas flux is not estimated directly in this study owing to a lack of bubble size distribution data, repeat surveys at active seep sites showed variations in acoustic response that suggest relative changes in gas flux of up to an order of magnitude over time scales of hours. The minimum depths of acoustic plume observations at 18 kHz averaged 875 m and frequently coincided with increased amplitudes of acoustic returns in layers of biological scatterers, suggesting acoustic masking of the gas bubble plumes within these layers. Minimum plume depth estimates were limited by the SBES field of view in only five instances.
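    A minimal sketch of the kind of computation the abstract describes: binning beam-pattern-compensated target-strength samples into a profile per vertical meter of plume height. This is illustrative only; the function and variable names are assumptions, not the authors' processing chain.

    ```python
    # Illustrative sketch (not the authors' code): bin split-beam target-strength
    # samples into a per-meter profile, compensating each sample for the
    # transducer beam pattern at its estimated off-axis angle.
    import numpy as np

    def ts_profile_per_meter(depths_m, ts_db, off_axis_deg, beam_pattern_db, bin_m=1.0):
        """depths_m: sample depths (m); ts_db: uncompensated target strength (dB);
        off_axis_deg: split-beam off-axis angle estimates (deg); beam_pattern_db:
        callable returning the one-way beam-pattern loss (dB) at a given angle."""
        # Two-way beam-pattern compensation for each sample.
        ts_comp = ts_db + 2.0 * np.array([beam_pattern_db(a) for a in off_axis_deg])
        # Sum backscatter in linear units within each depth bin, then back to dB.
        edges = np.arange(depths_m.min(), depths_m.max() + bin_m, bin_m)
        idx = np.digitize(depths_m, edges) - 1
        profile = np.full(len(edges) - 1, np.nan)
        for b in range(len(edges) - 1):
            sel = idx == b
            if sel.any():
                profile[b] = 10.0 * np.log10(np.sum(10.0 ** (ts_comp[sel] / 10.0)))
        return edges[:-1], profile
    ```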

    Towards a Semantic Search Engine for Scientific Articles

    Because of the data deluge in scientific publishing, finding relevant information is becoming increasingly difficult for researchers and readers. Building an enhanced scientific search engine that takes semantic relations into account poses a great challenge. As a starting point, semantic relations between keywords from scientific articles could be extracted in order to classify articles. This could later support browsing and searching for content in a scientifically meaningful way; indeed, by connecting keywords, the context of an article can be extracted. This paper aims to provide ideas for building such a smart search engine and describes initial contributions towards achieving this ambitious goal.
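    One hedged way to picture the keyword-connection idea is a co-occurrence graph: keywords that appear together in an article's keyword list are linked, and the neighbourhood of a query term hints at the context of related articles. The example below is an assumption about how such an index could be sketched, not the paper's implementation.

    ```python
    # Illustrative sketch: build a keyword co-occurrence graph from article
    # keyword lists and inspect the terms connected to a query keyword.
    from itertools import combinations
    import networkx as nx

    articles = {
        "article_1": ["time series", "classification", "deep learning"],
        "article_2": ["deep learning", "transfer learning"],
    }

    g = nx.Graph()
    for keywords in articles.values():
        for a, b in combinations(sorted(set(keywords)), 2):
            # Edge weight counts how often two keywords appear together.
            weight = g.get_edge_data(a, b, default={}).get("weight", 0) + 1
            g.add_edge(a, b, weight=weight)

    # Keywords linked to a query term suggest the context of related articles.
    print(sorted(g.neighbors("deep learning")))
    ```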

    Gaining Strength by Taking Power Away from Sacramento

    Transfer learning for time series classification

    Transfer learning for deep neural networks is the process of first training a base network on a source dataset and then transferring the learned features (the network's weights) to a second network to be trained on a target dataset. This idea has been shown to improve deep neural networks' generalization capabilities in many computer vision tasks such as image recognition and object localization. Apart from these applications, deep Convolutional Neural Networks (CNNs) have also recently gained popularity in the Time Series Classification (TSC) community. However, unlike for image recognition problems, transfer learning techniques have not yet been investigated thoroughly for the TSC task. This is surprising, as the accuracy of deep learning models for TSC could potentially be improved if the model is fine-tuned from a pre-trained neural network instead of being trained from scratch. In this paper, we fill this gap by investigating how to transfer deep CNNs for the TSC task. To evaluate the potential of transfer learning, we performed extensive experiments using the UCR archive, which is the largest publicly available TSC benchmark, containing 85 datasets. For each dataset in the archive, we pre-trained a model and then fine-tuned it on the other datasets, resulting in 7140 different deep neural networks. These experiments revealed that transfer learning can improve or degrade the model's predictions depending on the dataset used for transfer. Therefore, in an effort to predict the best source dataset for a given target dataset, we propose a new method relying on Dynamic Time Warping to measure inter-dataset similarity. We describe how our method can guide the transfer to choose the best source dataset, leading to an improvement in accuracy on 71 out of 85 datasets.
    Comment: Accepted at IEEE International Conference on Big Data 2018.
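    To make the source-selection idea concrete, the sketch below measures how close two datasets are with Dynamic Time Warping between representative (here, simply averaged) series; a smaller distance would suggest a better source for fine-tuning. This is a simplified assumption of the approach, not the paper's exact procedure.

    ```python
    # Illustrative sketch: DTW distance between per-dataset prototype series,
    # used as a proxy for inter-dataset similarity when choosing a source dataset.
    import numpy as np

    def dtw_distance(x, y):
        """Classic O(len(x) * len(y)) DTW between two 1-D series."""
        n, m = len(x), len(y)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = (x[i - 1] - y[j - 1]) ** 2
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return np.sqrt(D[n, m])

    def dataset_similarity(source_series, target_series):
        """Compare the mean series of two datasets; smaller means more similar."""
        proto_src = np.mean(np.asarray(source_series), axis=0)
        proto_tgt = np.mean(np.asarray(target_series), axis=0)
        return dtw_distance(proto_src, proto_tgt)
    ```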

    Adversarial Attacks on Deep Neural Networks for Time Series Classification

    Time Series Classification (TSC) problems are encountered in many real-life data mining tasks ranging from medicine and security to human activity recognition and food safety. With the recent success of deep neural networks in various domains such as computer vision and natural language processing, researchers have started adopting these techniques for solving time series data mining problems. However, to the best of our knowledge, no previous work has considered the vulnerability of deep learning models to adversarial time series examples, which could potentially make them unreliable in situations where the decision taken by the classifier is crucial, such as in medicine and security. For computer vision problems, such attacks have been shown to be easy to perform: adding an imperceptible amount of noise to an image is enough to trick the network into wrongly classifying it. Following this line of work, we propose to leverage existing adversarial attack mechanisms to add a crafted noise to the input time series in order to decrease the network's confidence when classifying instances at test time. Our results reveal that current state-of-the-art deep learning time series classifiers are vulnerable to adversarial attacks, which can have major consequences in multiple domains such as food safety and quality assurance.
    Comment: Accepted at IJCNN 2019.
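    A minimal sketch of one such existing attack mechanism, the Fast Gradient Sign Method, applied to a time series classifier; the model, input shapes, and epsilon below are placeholder assumptions rather than the paper's experimental setup.

    ```python
    # Illustrative FGSM-style perturbation of an input time series: take a small
    # step in the direction that most increases the classifier's loss.
    import torch
    import torch.nn.functional as F

    def fgsm_time_series(model, x, y, epsilon=0.1):
        """x: input series [batch, channels, length]; y: true labels [batch]."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        # The sign of the input gradient gives a small but damaging noise.
        return (x_adv + epsilon * x_adv.grad.sign()).detach()
    ```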

    Deep learning for time series classification: a review

    Time Series Classification (TSC) is an important and challenging problem in data mining. With the increase in time series data availability, hundreds of TSC algorithms have been proposed. Among these methods, only a few have considered Deep Neural Networks (DNNs) to perform this task. This is surprising, as deep learning has seen very successful applications in recent years. DNNs have indeed revolutionized the field of computer vision, especially with the advent of novel deeper architectures such as Residual and Convolutional Neural Networks. Apart from images, sequential data such as text and audio can also be processed with DNNs to reach state-of-the-art performance for document classification and speech recognition. In this article, we study the current state-of-the-art performance of deep learning algorithms for TSC by presenting an empirical study of the most recent DNN architectures for this task. We give an overview of the most successful deep learning applications in various time series domains under a unified taxonomy of DNNs for TSC. We also provide an open-source deep learning framework to the TSC community, in which we implemented each of the compared approaches and evaluated them on a univariate TSC benchmark (the UCR/UEA archive) and 12 multivariate time series datasets. By training 8,730 deep learning models on 97 time series datasets, we propose the most exhaustive study of DNNs for TSC to date.
    Comment: Accepted at Data Mining and Knowledge Discovery.
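    As a concrete illustration of the kind of architecture such a benchmark covers, here is a hedged sketch of a fully convolutional network for TSC (stacked Conv1d + BatchNorm + ReLU blocks followed by global average pooling); the layer sizes are illustrative assumptions, not the exact configurations evaluated in the review.

    ```python
    # Illustrative fully convolutional network for univariate or multivariate TSC.
    import torch
    import torch.nn as nn

    class FCN(nn.Module):
        def __init__(self, in_channels, n_classes):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(in_channels, 128, kernel_size=8, padding="same"),
                nn.BatchNorm1d(128), nn.ReLU(),
                nn.Conv1d(128, 256, kernel_size=5, padding="same"),
                nn.BatchNorm1d(256), nn.ReLU(),
                nn.Conv1d(256, 128, kernel_size=3, padding="same"),
                nn.BatchNorm1d(128), nn.ReLU(),
            )
            self.head = nn.Linear(128, n_classes)

        def forward(self, x):                    # x: [batch, channels, length]
            z = self.features(x).mean(dim=-1)    # global average pooling over time
            return self.head(z)

    # Example: logits = FCN(in_channels=1, n_classes=5)(torch.randn(4, 1, 96))
    ```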